Is the moral of this that threats of simulated torture don’t matter, because a UFAI will almost certainly be powerful enough to just torture the real you anyway?
One of the morals, yes. (Though I’d debate “almost certainly”. Safer would be “the power to torture acausally implies the power to torture causally”.)
Not always and everywhere: you could be causally safe from an AI whose future lightcone doesn’t intersect your own, but not acausally safe.
For practical purposes, you’re almost certainly right.